    A three-photon head-mounted microscope for imaging all layers of visual cortex in freely moving mice

    Advances in head-mounted microscopes have enabled imaging of neuronal activity using genetic tools in freely moving mice, but these microscopes are restricted to recording in minimally lit arenas and to imaging upper cortical layers. Here we built a 2-g, three-photon excitation-based microscope containing a z-drive that enabled access to all cortical layers while mice freely behaved in a fully lit environment. The microscope had on-board photon detectors, robust to environmental light, and the arena lighting was timed to the end of each line scan, enabling functional imaging of activity from cortical layer 4 and layer 6 neurons expressing jGCaMP7f in mice roaming a fully lit or dark arena. By comparing the neuronal activity measured from populations in these layers, we show that activity in cortical layer 4 and layer 6 is differentially modulated by lit and dark conditions during free exploration.
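
    As a rough sketch of the lighting scheme described above, the loop below lights the arena only during the end-of-line flyback, so that photon collection never coincides with ambient illumination. All names and timings here (ArenaLight, LINE_PERIOD_S, the duty cycle) are hypothetical illustrations, not the authors' implementation.

    import time

    LINE_PERIOD_S = 1e-3    # illustrative line-scan period (hypothetical)
    ACQUIRE_FRACTION = 0.8  # fraction of each line spent collecting photons (hypothetical)

    class ArenaLight:
        """Hypothetical stand-in for an arena LED driver."""
        def on(self):
            pass  # would switch the arena LEDs on
        def off(self):
            pass  # would switch the arena LEDs off

    def scan_line(light):
        """One raster line: photons are collected while the arena is dark;
        the arena is lit only during the end-of-line flyback."""
        light.off()
        time.sleep(LINE_PERIOD_S * ACQUIRE_FRACTION)        # acquisition window
        light.on()
        time.sleep(LINE_PERIOD_S * (1 - ACQUIRE_FRACTION))  # flyback, arena lit

    light = ArenaLight()
    for _ in range(512):  # one 512-line frame
        scan_line(light)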

    Measurement of arbitrary scan patterns for correction of imaging distortions in laser scanning microscopy

    Laser scanning microscopy requires beam steering through relay and focusing optics at sub-micron precision. In lightweight mobile systems, such as head-mounted multiphoton microscopes, managing distortion and imaging-plane curvature is impractical due to the complexity of the required compensation optics. Thus, the resulting scan pattern limits anatomical fidelity and decreases the efficiency of analysis algorithms. Here, we present a technique that reconstructs the three-dimensional scan path, requiring only translation of a simple fluorescent test probe. Our method is applicable to any type of scanning instrument with sectioning capabilities, without prior assumptions regarding the origin of imaging deviations. Further, we demonstrate that the obtained scan pattern allows analysis of these errors and restoration of anatomical accuracy, which is relevant for complementary methods such as motion correction, further enhancing spatial registration and feature extraction.
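
    The calibration idea lends itself to a short sketch: the probe is moved to known stage positions, localized in each image, and a smooth pixel-to-space map is fitted to the resulting point pairs. The quadratic design matrix and all names below are assumptions made for illustration; the abstract does not specify this model.

    import numpy as np

    def design_matrix(px, py):
        # quadratic terms stand in for distortion and field curvature
        return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

    def fit_scan_map(pixel_xy, stage_xyz):
        """pixel_xy: (N, 2) probe centroids localized in image coordinates.
        stage_xyz: (N, 3) known probe positions from the translation stage.
        Returns least-squares coefficients mapping pixels to 3D positions."""
        A = design_matrix(pixel_xy[:, 0], pixel_xy[:, 1])
        coeffs, *_ = np.linalg.lstsq(A, stage_xyz, rcond=None)
        return coeffs  # shape (6, 3)

    def reconstruct_scan_path(coeffs, pixel_xy):
        """Evaluate the fitted map over any set of pixel coordinates."""
        return design_matrix(pixel_xy[:, 0], pixel_xy[:, 1]) @ coeffs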

    Anatomically-based skeleton kinetics and pose estimation in freely-moving rodents

    Forming a complete picture of the relationship between neural activity and body kinetics requires quantification of skeletal joint biomechanics during behavior. However, without detailed knowledge of the underlying skeletal motion, inferring joint kinetics from surface-tracking approaches is difficult, especially for animals where the relationship between surface anatomy and skeleton changes during motion. Here we developed a videography-based method enabling detailed three-dimensional kinetic quantification of an anatomically defined skeleton in untethered, freely behaving animals. This skeleton-based model was constrained by anatomical principles and joint motion limits and provided skeletal pose estimates for a range of rodent sizes, even when limbs were occluded. Model-inferred joint kinetics for both gait and gap-crossing behaviors were verified by direct measurement of limb placement, showing that complex decision-making behaviors can be accurately reconstructed at the level of skeletal kinetics using our anatomically constrained model.

    Estimation of skeletal kinematics in freely moving rodents

    Forming a complete picture of the relationship between neural activity and skeletal kinematics requires quantification of skeletal joint biomechanics during free behavior; however, without detailed knowledge of the underlying skeletal motion, inferring limb kinematics using surface-tracking approaches is difficult, especially for animals where the relationship between the surface and the underlying skeleton changes during motion. Here we developed a videography-based method enabling detailed three-dimensional kinematic quantification of an anatomically defined skeleton in untethered, freely behaving rats and mice. This skeleton-based model was constrained using anatomical principles and joint motion limits and provided skeletal pose estimates for a range of body sizes, even when limbs were occluded. Model-inferred limb positions and joint kinematics during gait and gap-crossing behaviors were verified by direct measurement of either limb placement or limb kinematics using inertial measurement units. Together, we show that complex decision-making behaviors can be accurately reconstructed at the level of skeletal kinematics using our anatomically constrained model.
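
    To illustrate how anatomical constraints can enter pose estimation, the sketch below clamps candidate joint angles to fixed motion limits before forward kinematics, then searches the constrained space for the pose that best explains a tracked point. The planar two-joint limb, the limits, and the brute-force search are illustrative assumptions, not the authors' model.

    import numpy as np

    # Hypothetical anatomical joint limits (radians) and segment lengths (mm)
    JOINT_LIMITS = {"hip": (-0.8, 1.6), "knee": (0.0, 2.4)}
    SEGMENT_LEN = {"thigh": 18.0, "shank": 20.0}

    def clamp(angles):
        """Enforce the anatomical motion limits on candidate joint angles."""
        return {j: float(np.clip(a, *JOINT_LIMITS[j])) for j, a in angles.items()}

    def forward_kinematics(angles, origin=np.zeros(2)):
        """Planar two-joint limb: hip, knee and foot positions for clamped angles."""
        a = clamp(angles)
        knee = origin + SEGMENT_LEN["thigh"] * np.array(
            [np.cos(a["hip"]), np.sin(a["hip"])])
        foot = knee + SEGMENT_LEN["shank"] * np.array(
            [np.cos(a["hip"] + a["knee"]), np.sin(a["hip"] + a["knee"])])
        return origin, knee, foot

    def fit_pose(observed_foot, n_grid=100):
        """Brute-force search over the constrained angle space for the pose
        whose foot best matches a tracked (possibly occluded) surface point."""
        best, best_err = None, np.inf
        for h in np.linspace(*JOINT_LIMITS["hip"], n_grid):
            for k in np.linspace(*JOINT_LIMITS["knee"], n_grid):
                foot = forward_kinematics({"hip": h, "knee": k})[2]
                err = np.linalg.norm(foot - observed_foot)
                if err < best_err:
                    best, best_err = {"hip": h, "knee": k}, err
        return best, best_err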

    Freely-moving mice visually pursue prey using a retinal area with least optic flow

    Mice have a large visual field that is constantly stabilized by vestibular-ocular-reflex-driven eye rotations that counter head rotations. While maintaining their extensive visual coverage is advantageous for predator detection, mice also track and capture prey using vision. However, quantifying object location in the field of view of a freely moving animal is challenging. Here, we developed a method to digitally reconstruct and quantify the visual scene of freely moving mice performing a visually based prey capture task. By isolating the visual sense and combining a mouse eye optic model with the head and eye rotations, the detailed reconstruction of the digital environment and retinal features were projected onto the corneal surface for comparison and updated throughout the behavior. By quantifying the spatial location of objects in the visual scene and their motion throughout the behavior, we show that the image of the prey is maintained within a small area, the functional focus, in the upper-temporal part of the retina. This functional focus coincides with a region of minimal optic flow in the visual field, and consequently minimal motion-induced image blur during pursuit, as well as the reported high-density region of Alpha-ON sustained retinal ganglion cells.

    Visual pursuit behavior in mice maintains the pursued prey on the retinal region with least optic flow

    Mice have a large visual field that is constantly stabilized by vestibular ocular reflex (VOR)-driven eye rotations that counter head rotations. While maintaining their extensive visual coverage is advantageous for predator detection, mice also track and capture prey using vision. However, quantifying object location in the field of view of a freely moving animal is challenging. Here, we developed a method to digitally reconstruct and quantify the visual scene of freely moving mice performing a visually based prey capture task. By isolating the visual sense and combining a mouse eye optic model with the head and eye rotations, the detailed reconstruction of the digital environment and retinal features were projected onto the corneal surface for comparison and updated throughout the behavior. By quantifying the spatial location of objects in the visual scene and their motion throughout the behavior, we show that the prey image consistently falls within a small area of the VOR-stabilized visual field. This functional focus coincides with the region of minimal optic flow within the visual field, and consequently the area of minimal motion-induced image blur, as during pursuit mice ran directly toward the prey. The functional focus lies in the upper-temporal part of the retina and coincides with the reported high-density region of Alpha-ON sustained retinal ganglion cells.

    Mice have a lot to keep an eye on. To survive, they need to dodge predators looming on land and from the skies, while also hunting down the small insects that are part of their diet. To do this, they are helped by their large panoramic field of vision, which stretches from behind and over their heads to below their snouts. To stabilize their gaze when they are on the prowl, mice reflexively move their eyes to counter the movement of their head: in fact, they are unable to move their eyes independently. This raises the question: what part of their large visual field of view do these rodents use when tracking a prey, and to what advantage? This is difficult to investigate, since it requires simultaneously measuring the eye and head movements of mice as they chase and capture insects. In response, Holmgren, Stahr et al. developed a new technique to record the precise eye positions, head rotations and prey location of mice hunting crickets in surroundings that were fully digitized at high resolution. Combining this information allowed the team to mathematically recreate what mice would see as they chased the insects, and to assess what part of their large visual field they were using. This revealed that, once a cricket had entered any part of the mice's large field of view, the rodents shifted their head - but not their eyes - to bring the prey into both eye views, and then ran directly at it. If the insect escaped, the mice repeated that behavior. During the pursuit, the cricket's position was mainly held in a small area of the mouse's view that corresponds to a specialized region in the eye which is thought to help track objects. This region also allowed the least motion-induced image blur when the animals were running forward. The approach developed by Holmgren, Stahr et al. gives a direct insight into what animals see when they hunt, and how this constantly changing view ties to what happens in the eyes. This method could be applied to other species, ushering in a new wave of tools to explore what freely moving animals see, and the relationship between behaviour and neural circuitry.
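
    The central geometric step, finding where the prey falls on the retina from measured head and eye rotations, can be sketched as follows. The rotation parameterization, coordinate conventions and optical axis below are illustrative assumptions rather than the published eye model.

    import numpy as np

    def rot_z(theta):
        """Rotation about the vertical axis (yaw), used here for illustration."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def prey_in_eye_frame(prey_world, eye_world, R_head, R_eye_in_head):
        """Rotate the unit eye-to-prey vector from world into eye coordinates."""
        v = prey_world - eye_world
        v = v / np.linalg.norm(v)
        R_eye_to_world = R_head @ R_eye_in_head
        return R_eye_to_world.T @ v

    def retinal_eccentricity(v_eye, optical_axis=np.array([1.0, 0.0, 0.0])):
        """Angle (degrees) between the prey direction and the optical axis."""
        return np.degrees(np.arccos(np.clip(v_eye @ optical_axis, -1.0, 1.0)))

    # Illustrative check: prey straight ahead in the world, head yawed 20 degrees
    v = prey_in_eye_frame(np.array([0.1, 0.0, 0.0]), np.zeros(3),
                          rot_z(np.radians(20.0)), np.eye(3))
    print(retinal_eccentricity(v))  # ~20 degrees off the optical axis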

    Adaptive Movement Compensation for In Vivo Imaging of Fast Cellular Dynamics within a Moving Tissue

    In vivo non-linear optical microscopy has been essential to advancing our knowledge of how intact biological systems work. It has been particularly enabling for deciphering fast spatiotemporal cellular dynamics in neural networks. The power of the technique stems from its optical sectioning capability, which in turn also limits its application to essentially immobile tissue. Only tissue not affected by movement, or in which movement can be physically constrained, can be imaged fast enough to conduct functional studies at high temporal resolution. Here, we show dynamic two-photon Ca2+ imaging in the spinal cord of a living rat at millisecond timescales, free of motion artifacts, using an optical stabilization system. We describe a fast, non-contact adaptive movement compensation approach, applicable to rough and weakly reflective surfaces, allowing real-time functional imaging from intrinsically moving tissue in live animals. The strategy involves enslaving the position of the microscope objective to that of the tissue surface in real time through optical monitoring and a closed feedback loop. The performance of the system allows for efficient image locking even in conditions of random or irregular movements.
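
    The enslaving strategy amounts to a feedback controller that holds the objective at a fixed standoff from the optically monitored surface. The PI loop below is a generic sketch under assumed hardware hooks (read_surface_distance, move_objective) and illustrative gains; the authors' actual controller design is not reproduced here.

    import time

    TARGET_STANDOFF_UM = 200.0  # desired objective-to-surface distance (hypothetical)
    KP, KI = 0.6, 0.1           # illustrative proportional and integral gains

    def compensation_loop(read_surface_distance, move_objective,
                          dt=0.001, duration=1.0):
        """Generic PI loop: command the objective so the optically measured
        standoff stays at the target while the tissue moves."""
        integral = 0.0
        for _ in range(int(duration / dt)):
            error = TARGET_STANDOFF_UM - read_surface_distance()
            integral += error * dt
            move_objective(KP * error + KI * integral)  # relative z command (um)
            time.sleep(dt)

    # Illustrative use with dummy hardware hooks:
    compensation_loop(lambda: 200.0, lambda dz: None, duration=0.01)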